
The Visual Iconicity Challenge: Evaluating Vision-Language Models on Sign Language Form-Meaning Mapping

Keleş, Onur, Özyürek, Aslı, Ortega, Gerardo, Gökgöz, Kadir, Ghaleb, Esam

arXiv.org Artificial Intelligence

Iconicity, the resemblance between linguistic form and meaning, is pervasive in signed languages, offering a natural testbed for visual grounding. For vision-language models (VLMs), the challenge is to recover such essential mappings from dynamic human motion rather than static context. We introduce the Visual Iconicity Challenge, a novel video-based benchmark that adapts psycholinguistic measures to evaluate VLMs on three tasks: (i) phonological sign-form prediction (e.g., handshape, location), (ii) transparency (inferring meaning from visual form), and (iii) graded iconicity ratings. We assess 13 state-of-the-art VLMs in zero- and few-shot settings on Sign Language of the Netherlands and compare them to human baselines. On phonological form prediction, VLMs recover some handshape and location detail but remain below human performance; on transparency, they are far from human baselines; and only top models correlate moderately with human iconicity ratings. Interestingly, models with stronger phonological form prediction correlate better with human iconicity judgments, indicating shared sensitivity to visually grounded structure. Our findings validate these diagnostic tasks and motivate human-centric signals and embodied learning methods for modelling iconicity and improving visual grounding in multimodal models.
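
A minimal sketch of how the third task's evaluation might look, assuming a rank correlation such as Spearman's rho is used to compare model and human ratings (the abstract does not name the statistic); the rating arrays below are illustrative placeholders, not benchmark data:

    # Compare a VLM's graded iconicity ratings against human baselines.
    # Placeholder values; a real run would use the benchmark's sign set.
    from scipy.stats import spearmanr

    human_ratings = [6.2, 1.8, 4.5, 5.9, 2.3]  # mean human ratings, e.g. on a 1-7 scale
    model_ratings = [5.8, 2.5, 4.1, 6.3, 3.0]  # ratings elicited from a VLM for the same signs

    rho, p = spearmanr(human_ratings, model_ratings)
    print(f"Spearman rho = {rho:.3f} (p = {p:.3f})")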


EdGCon: Auto-assigner of Iconicity Ratings Grounded by Lexical Properties to Aid in Generation of Technical Gestures

Hossain, Sameena, Kamboj, Payal, Maity, Aranyak, Azuma, Tamiko, Banerjee, Ayan, Gupta, Sandeep K. S.

arXiv.org Artificial Intelligence

Gestures that share similarities in their forms and are related in their meanings should be easier for learners to recognize and incorporate into their existing lexicon. In that regard, to be more readily accepted as standard by the Deaf and Hard of Hearing community, technical gestures in American Sign Language (ASL) should optimally share similar forms with their lexical neighbors. We utilize a lexical database of ASL, ASL-LEX, to identify lexical relations within a set of technical gestures, and use automated identification of three sub-lexical properties in ASL: location, handshape, and movement. EdGCon assigns an iconicity rating based on the similarity of the new gesture's lexical properties to those of an existing set of technical gestures, and on the relatedness of the new technical word's meaning to that of the existing technical words. We collected 30 ad hoc crowdsourced technical gestures from different internet websites and tested them against 31 gestures from the DeafTEC technical corpus. We found that EdGCon correctly auto-assigned the iconicity ratings 80.76% of the time.
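
A minimal sketch of the scoring idea described above, assuming an equal-weight blend of form similarity and meaning relatedness (the abstract does not specify EdGCon's actual measures or weights); the property coding, gesture entries, and alpha parameter are all hypothetical:

    # Hypothetical EdGCon-style iconicity score: blend sub-lexical property
    # overlap with semantic relatedness. Weights and coding are assumptions.
    SUBLEXICAL_PROPERTIES = ("location", "handshape", "movement")

    def property_overlap(new_gesture: dict, reference: dict) -> float:
        """Fraction of sub-lexical properties shared with a reference gesture."""
        matches = sum(new_gesture[p] == reference[p] for p in SUBLEXICAL_PROPERTIES)
        return matches / len(SUBLEXICAL_PROPERTIES)

    def iconicity_rating(new_gesture, references, semantic_relatedness, alpha=0.5):
        """Blend the best form similarity with meaning relatedness (alpha assumed)."""
        form_sim = max(property_overlap(new_gesture, r) for r in references)
        return alpha * form_sim + (1 - alpha) * semantic_relatedness

    # Illustrative usage with made-up ASL-LEX-style property codes
    refs = [{"location": "neutral", "handshape": "B", "movement": "straight"}]
    new = {"location": "neutral", "handshape": "B", "movement": "circular"}
    print(iconicity_rating(new, refs, semantic_relatedness=0.8))  # about 0.733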